Photoactive iridium complexes are of broad interest because their applications range from lighting to photocatalysis. However, predicting the excited-state properties of these complexes challenges ab initio methods such as time-dependent density functional theory (TDDFT) from the standpoints of both accuracy and computational cost, complicating high-throughput virtual screening (HTVS). We instead leverage low-cost machine learning (ML) models to predict the excited-state properties of photoactive iridium complexes. We use experimental data on 1,380 iridium complexes to train and evaluate the ML models and identify the best and most transferable models to be those trained on electronic structure features from low-cost density functional tight-binding calculations. Using these models, we predict the three excited-state properties considered, namely the mean phosphorescence emission energy, the excited-state lifetime, and the emission spectral integral, with accuracy competitive with or superseding TDDFT. We perform feature importance analysis to determine which iridium complex properties govern excited-state properties and validate these trends with explicit examples. To demonstrate how the ML models can be used for HTVS and to accelerate chemical discovery, we curate a set of novel hypothetical iridium complexes and identify promising ligands for the design of new phosphors.
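To make the workflow concrete, the following is a minimal sketch of training a regression model on electronic-structure descriptors and inspecting feature importances. The features, targets, and data are synthetic stand-ins, not the paper's 1,380-complex dataset or its tight-binding featurization.

```python
# Hedged sketch: regression on electronic-structure descriptors to predict a
# phosphor emission energy. Features and data are illustrative stand-ins.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_absolute_error
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic stand-in for low-cost electronic-structure features of complexes,
# e.g., frontier orbital energies (eV) and a ligand-field proxy.
n = 1380
X = rng.normal(size=(n, 3))
# Toy target: an "emission energy" (eV) as a noisy function of the features.
y = 2.4 + 0.3 * X[:, 0] - 0.2 * X[:, 1] + 0.1 * X[:, 2] ** 2 \
    + rng.normal(scale=0.05, size=n)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_tr, y_tr)
mae = mean_absolute_error(y_te, model.predict(X_te))
print(f"test MAE: {mae:.3f} eV")

# Feature importance analysis, mirroring the abstract's workflow.
importances = model.feature_importances_
```

The same pattern (featurize, fit, rank importances) applies regardless of which descriptor set or regressor is used.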
Two outstanding challenges in machine learning (ML)-accelerated chemical discovery are the synthesizability of candidate molecules or materials and the fidelity of the data used in ML model training. To address the first challenge, we construct a hypothetical design space of 32.5M transition metal complexes (TMCs), in which all of the constituent fragments (i.e., metals and ligands) and ligand symmetries are synthetically accessible. To address the second challenge, we search for consensus in predictions among 23 density functional approximations across multiple rungs of Jacob's ladder. To accelerate the screening of these 32.5M TMCs, we use efficient global optimization to sample candidate low-spin chromophores that simultaneously have low absorption energies and low static correlation. Despite the scarcity (i.e., $<$ 0.01\%) of potential chromophores in this large chemical space, we identify transition metal chromophores with high likelihood (i.e., $>$ 10\%) as the ML models improve during active learning. This represents a 1,000-fold acceleration in discovery, corresponding to discovery in days instead of years. Analyses of the candidate chromophores reveal a preference for Co(III) and large, strong-field ligands with more bond saturation. We compute the absorption spectra of promising chromophores on the Pareto front by time-dependent density functional theory calculations and verify that two-thirds of them have the desired excited-state properties. Although these complexes have never been explored experimentally, their constituent ligands exhibit interesting optical properties in the literature, exemplifying the effectiveness of our construction of a realistic TMC design space and of the active learning approach.
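The efficient-global-optimization step can be sketched with a simple expected-improvement loop over a toy one-dimensional design space. The surrogate objective and Gaussian-process settings below are illustrative; the actual work samples a 32.5M-complex space against two objectives (absorption energy and static correlation).

```python
# Hedged sketch of active learning with expected improvement (EI): fit a
# surrogate to evaluated designs, acquire the candidate with the highest EI,
# repeat. The objective is a toy stand-in for an expensive DFT evaluation.
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

rng = np.random.default_rng(1)
candidates = np.linspace(0, 1, 501).reshape(-1, 1)  # discretized design space

def objective(x):  # toy stand-in for an expensive simulation
    return np.sin(6 * x) + 0.5 * x

# Seed with a few "evaluated" designs, then iterate: fit surrogate, acquire.
idx = [int(i) for i in rng.choice(len(candidates), size=5, replace=False)]
for _ in range(15):
    X = candidates[idx]
    y = objective(X).ravel()
    gp = GaussianProcessRegressor(kernel=RBF(length_scale=0.1),
                                  alpha=1e-6, optimizer=None).fit(X, y)
    mu, sigma = gp.predict(candidates, return_std=True)
    best = y.min()  # we minimize (e.g., a low absorption energy)
    z = (best - mu) / np.maximum(sigma, 1e-9)
    ei = (best - mu) * norm.cdf(z) + sigma * norm.pdf(z)
    ei[idx] = 0.0  # do not re-acquire already-evaluated designs
    idx.append(int(np.argmax(ei)))

found = objective(candidates[idx]).min()
print(f"best objective found: {found:.3f}")
```

With only 20 total evaluations out of 501 candidates, the loop closes in on the global minimum, which is the source of the acceleration the abstract reports.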
Approximate density functional theory (DFT) has become indispensable owing to its cost-accuracy trade-off in comparison to more demanding but accurate correlated wavefunction theory. Nevertheless, no single density functional approximation (DFA) with universal accuracy has been identified to date, leading to uncertainty in the quality of data generated from DFT. With electron density fitting and transfer learning, we build a DFA recommender that selects a DFA, in a system-specific manner, with respect to the gold-standard but cost-prohibitive coupled cluster theory. We demonstrate this recommender approach on the evaluation of vertical spin splitting energies of challenging transition metal complexes. Our recommender predicts the top-performing DFA and yields excellent accuracy (ca. 2 kcal/mol) for chemical discovery, outperforming both individual transfer learning models and the single best functional in a set of 48 DFAs. We demonstrate the transferability of the DFA recommender to experimentally synthesized compounds with distinct chemistry.
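The recommender idea reduces to: fit a per-DFA error model against the coupled-cluster reference, then pick the DFA with the smallest predicted error for each new system. A toy version with hypothetical functional choices and a single synthetic feature (the real work uses electron-density features and transfer learning):

```python
# Hedged sketch of a per-system DFA recommender. Data, feature, and the
# per-DFA error models are illustrative stand-ins.
import numpy as np

rng = np.random.default_rng(2)
dfas = ["PBE", "B3LYP", "M06", "SCAN"]

# Toy training data: one scalar feature per system and each DFA's signed
# error (kcal/mol) versus a coupled-cluster reference.
features = rng.normal(size=200)
true_slopes = {"PBE": 3.0, "B3LYP": -1.5, "M06": 0.5, "SCAN": -2.0}
errors = {d: s * features + rng.normal(scale=0.3, size=200)
          for d, s in true_slopes.items()}

# Fit one least-squares error model per DFA.
models = {d: np.polyfit(features, e, 1) for d, e in errors.items()}

def recommend(x):
    """Return the DFA with the smallest predicted |error| for feature x."""
    return min(dfas, key=lambda d: abs(np.polyval(models[d], x)))

print(recommend(1.0))
```

The key design point is that the recommendation is system-specific: different feature values can select different functionals, which is why the recommender can beat any single fixed DFA.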
Appropriately identifying and treating molecules and materials with significant multi-reference (MR) character is crucial for achieving high data fidelity in virtual high-throughput screening (VHTS). Nevertheless, most VHTS is carried out with approximate density functional theory (DFT) using a single functional. Despite the development of numerous MR diagnostics, the extent to which a single value of such a diagnostic indicates the MR effect on a chemical property prediction is not well established. We evaluate MR diagnostics for over 10,000 transition metal complexes (TMCs) and compare them to those of organic molecules. We reveal that only some MR diagnostics are transferable across these materials spaces. By studying the influence of MR character on chemical properties (i.e., the MR effect) that involve multiple potential energy surfaces (i.e., adiabatic spin splitting, $\Delta E_\mathrm{H-L}$, and ionization potential, IP), we observe that cancellation of MR effects outweighs accumulation. Differences in MR character are more important than the total degree of MR character in predicting the MR effect on property predictions. Motivated by this observation, we build transfer learning models to directly predict CCSD(T)-level adiabatic $\Delta E_\mathrm{H-L}$ and IP from lower levels of theory. By combining these models with uncertainty quantification and multi-level modeling, we introduce a multi-pronged strategy that accelerates data acquisition by at least a factor of three while achieving chemical accuracy (i.e., 1 kcal/mol) for robust VHTS.
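The multi-pronged strategy can be caricatured as: predict the high-level value with a cheap model, and use an uncertainty estimate to decide when the expensive calculation is still needed. Everything below (data, model, and threshold) is an illustrative stand-in, not the paper's actual pipeline:

```python
# Hedged sketch: predict a CCSD(T)-like property from a low-level value, with
# ensemble spread as a simple uncertainty proxy gating the expensive fallback.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(3)

# Toy data: low-level (DFT-like) spin splittings and "true" high-level values.
x_low = rng.normal(size=(500, 1)) * 10.0
y_high = 0.8 * x_low.ravel() + 2.0 + rng.normal(scale=0.5, size=500)

model = RandomForestRegressor(n_estimators=100, random_state=0).fit(x_low, y_high)

def predict_with_uq(x):
    """Ensemble mean and across-tree spread as an uncertainty proxy."""
    per_tree = np.array([t.predict(x) for t in model.estimators_])
    return per_tree.mean(axis=0), per_tree.std(axis=0)

x_new = np.array([[4.0]])
mean, std = predict_with_uq(x_new)

# Multi-pronged decision: trust the model only when it is confident,
# otherwise schedule the expensive high-level calculation.
THRESHOLD = 1.0  # illustrative cutoff, kcal/mol scale
decision = "use ML prediction" if std[0] < THRESHOLD else "run CCSD(T)"
print(mean[0], std[0], decision)
```

The acceleration comes from the fraction of systems where the cheap prediction is confidently accepted, so the expensive method runs only on the remainder.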
Machine learning (ML)-based discovery requires large amounts of high-fidelity data to reveal predictive structure-property relationships. For many properties of interest in materials discovery, the challenging nature and high cost of data generation have resulted in a data landscape that is both sparsely populated and of dubious quality. Data-driven techniques starting to overcome these limitations include the use of consensus across density functionals in density functional theory, the development of new functionals or accelerated electronic structure theories, and the detection of where computationally demanding methods are most necessary. When properties cannot be reliably simulated, large experimental datasets can be used to train ML models. In the absence of manual curation, increasingly sophisticated natural language processing and automated image analysis make it possible to learn structure-property relationships from the literature. Models trained on these datasets will improve with community feedback.
The paper presents a cross-domain review analysis of four popular review datasets: Amazon, Yelp, Steam, and IMDb. The analysis is performed using Hadoop and Spark, which allow for efficient and scalable processing of large datasets. By examining close to 12 million reviews from these four online platforms, we hope to uncover interesting trends in sales and customer sentiment over the years. Our analysis includes a study of the number of reviews and their distribution over time, as well as an examination of the relationships between review attributes such as upvotes, creation time, rating, and sentiment. By comparing reviews across domains, we hope to gain insight into the factors that drive customer satisfaction and engagement in different product categories.
Automated offensive language detection is essential in combating the spread of hate speech, particularly in social media. This paper describes our work on Offensive Language Identification in the low-resource Indic language Marathi. The problem is formulated as a text classification task to identify a tweet as offensive or non-offensive. We evaluate different mono-lingual and multi-lingual BERT models on this classification task, focusing on BERT models pre-trained on social media datasets. We compare the performance of MuRIL, MahaTweetBERT, MahaTweetBERT-Hateful, and MahaBERT on the HASOC 2022 test set. We also explore external data augmentation from other existing Marathi hate speech corpora, HASOC 2021 and L3Cube-MahaHate. MahaTweetBERT, a BERT model pre-trained on Marathi tweets, when fine-tuned on the combined dataset (HASOC 2021 + HASOC 2022 + MahaHate), outperforms all other models with an F1 score of 98.43 on the HASOC 2022 test set. With this, we also provide a new state-of-the-art result on the HASOC 2022 / MOLD v2 test set.
We consider the problem of continually releasing an estimate of the population mean of a stream of samples that is user-level differentially private (DP). At each time instant, a user contributes a sample, and the users can arrive in arbitrary order. Until now, these requirements of continual release and user-level privacy were considered in isolation. In practice, however, both requirements arise together, as users often contribute data repeatedly and multiple queries are made. We provide an algorithm that outputs a mean estimate at every time instant $t$ such that the overall release is user-level $\varepsilon$-DP and has the following error guarantee: denoting by $M_t$ the maximum number of samples contributed by a user, as long as $\tilde{\Omega}(1/\varepsilon)$ users have $M_t/2$ samples each, the error at time $t$ is $\tilde{O}(1/\sqrt{t}+\sqrt{M_t}/t\varepsilon)$. This is a universal error guarantee that is valid for all arrival patterns of the users. Furthermore, it (almost) matches the existing lower bounds for the single-release setting at all time instants when users have contributed an equal number of samples.
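For background, the classic binary-tree mechanism for continually releasing a private running sum can be sketched as below. This is event-level DP over bounded samples in $[0, 1]$, not the paper's user-level algorithm with repeated per-user contributions; the constants and seeding are illustrative.

```python
# Hedged sketch: binary-tree mechanism for continual release of a running
# mean. Each sample touches at most `levels` dyadic blocks, so per-block
# Laplace noise of scale levels/epsilon yields event-level epsilon-DP.
import math
import random

random.seed(0)

def laplace(scale):
    """Sample Laplace(0, scale) by inverse-CDF."""
    u = random.random() - 0.5
    return -scale * math.copysign(math.log(1 - 2 * abs(u)), u)

class TreeSum:
    def __init__(self, epsilon, horizon):
        self.levels = max(1, math.ceil(math.log2(horizon)) + 1)
        self.scale = self.levels / epsilon
        self.samples = []
        self.noise = {}  # (level, block_index) -> cached Laplace draw

    def _noisy_block(self, level, index):
        size = 1 << level
        key = (level, index)
        if key not in self.noise:
            self.noise[key] = laplace(self.scale)  # noise drawn once per block
        start = index * size
        return sum(self.samples[start:start + size]) + self.noise[key]

    def release(self, x):
        """Ingest one sample in [0, 1]; return a private mean estimate."""
        self.samples.append(x)
        t = len(self.samples)
        total, pos = 0.0, 0
        # Decompose [0, t) into dyadic blocks, largest first.
        for level in range(self.levels - 1, -1, -1):
            size = 1 << level
            if t - pos >= size:
                total += self._noisy_block(level, pos // size)
                pos += size
        return total / t

mech = TreeSum(epsilon=50.0, horizon=64)
estimates = [mech.release(0.5) for _ in range(64)]
print(round(estimates[-1], 3))
```

Because each release decomposes the prefix into $O(\log t)$ noisy blocks, the noise in the sum grows only polylogarithmically with $t$; handling users who contribute many samples each, as in the abstract, requires additional machinery on top of this.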
Speech-centric machine learning systems have revolutionized many leading domains ranging from transportation and healthcare to education and defense, profoundly changing how people live, work, and interact with each other. However, recent studies have demonstrated that many speech-centric ML systems may not be trustworthy enough for broader deployment. Specifically, concerns over privacy breaches, discriminatory performance, and vulnerability to adversarial attacks have all been raised in ML research. To address these challenges and risks, a significant number of efforts have been made to ensure these ML systems are trustworthy, especially private, safe, and fair. In this paper, we conduct the first comprehensive survey of speech-centric trustworthy ML topics related to privacy, safety, and fairness. In addition to serving as a summary report for the research community, we point out several promising future research directions to inspire researchers who wish to explore this area further.
The automated synthesis of correct-by-construction Boolean functions from logical specifications is known as the Boolean Functional Synthesis (BFS) problem. BFS has many application areas, ranging from software engineering to circuit design. In this paper, we introduce BNSynth, the first tool to solve the BFS problem under a given bound on the solution space. Bounding the solution space induces the synthesis of smaller functions, which benefits resource-constrained areas such as circuit design. BNSynth uses a counter-example guided, neural approach to solve the bounded BFS problem. Initial results show promise in synthesizing smaller solutions; we observe an average solution-size reduction of at least \textbf{3.2X} (and up to \textbf{24X}) compared to state-of-the-art tools on our benchmarks. BNSynth is available on GitHub under an open source license.
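A counter-example guided synthesis loop over a bounded candidate space can be sketched in a few lines. This toy version enumerates truth tables for a two-input specification and is only a crude stand-in for BNSynth's neural, bounded approach:

```python
# Hedged sketch of counter-example guided Boolean functional synthesis:
# propose a candidate f, verify it against the spec, and record any
# counterexample to prune future candidates. The spec here is illustrative.
from itertools import product

def spec(x1, x2, y):
    # Example specification: the output y must equal x1 XOR x2.
    return y == (x1 ^ x2)

inputs = list(product([0, 1], repeat=2))
# Bounded candidate space: all 16 truth tables over two inputs.
candidates = [dict(zip(inputs, bits)) for bits in product([0, 1], repeat=4)]

counterexamples = []
synthesized = None
for table in candidates:
    # Cheap check against recorded counterexamples first (CEGIS-style).
    if any(not spec(x1, x2, table[(x1, x2)]) for x1, x2 in counterexamples):
        continue
    # Full verification: search for a new counterexample.
    cex = next(((x1, x2) for x1, x2 in inputs
                if not spec(x1, x2, table[(x1, x2)])), None)
    if cex is None:
        synthesized = table  # candidate satisfies the spec on all inputs
        break
    counterexamples.append(cex)

print(synthesized)
```

Real BFS tools replace the brute-force candidate enumeration with SAT/SMT or, as in BNSynth, a learned generator, but the verify-and-refine loop has the same shape.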